2025 AI Infra Summit
Watch the videos below and check out our recent research by grabbing a bundle of our latest 2024-2025 reports (SASE, Carrier and Campus NaaS, AI in Networking) plus the latest edition of our Data Center Networking for AI and Cloud report (likely of interest if you attended the AI Infra Summit). The reports are UNLOCKED, with no personal information needed to download.
Thanks to our sponsors for supporting our site!
Content from Thought Leaders at AI Infra Summit 2025
Views expressed are those of the presenting individuals and companies and do not necessarily represent the views of Converge! Network Digest or AvidThink.

Aviz on Scaling AI Workloads with Open Networks
Vishal Shukla, Founder and CEO of Aviz Networks, leads the company’s development of network management solutions that improve GPU resource allocation and monitoring across multi-tenant environments. Through its collaboration with Nvidia on the Spectrum-X platform, Aviz Networks delivers system deployment and analytics capabilities for both Nvidia and AMD GPU infrastructure in SONiC-based networks.

Ayar Labs CTO on Optical I/O for AI Networks
Vladimir Stojanovic, CTO and Co-Founder of Ayar Labs, outlines the company’s optical I/O technology, which addresses GPU scaling challenges in AI inference workloads by enabling connectivity across multi-rack GPU clusters. The solution achieves 10-20x better performance per watt while supporting direct extended-memory connections, a significant advance over traditional copper I/O.

Broadcom’s Ram Velaga on Scaling AI Networks
Ram Velaga, GM and SVP of the Core Switching Group at Broadcom, discusses how networking must evolve to support expanding AI infrastructure and machine learning systems across multiple scaling domains in data centers. He explains why Ethernet stands out as the optimal networking technology for AI infrastructure, highlighting its clean interfaces, proven reliability, and consistent bandwidth improvements that track industry needs.

Cornelis CEO on Scalability for AI Networking
At AI Infra Summit 2025, Lisa Spelman, CEO of Cornelis Networks, shared insights on GPU utilization challenges, noting that 30% of GPU time is lost waiting on communication; recovering that time would lift effective throughput by roughly 1.4x, since the same work would finish in 70% of the time. Cornelis Networks addresses these inefficiencies with networking solutions that scale to 500,000 endpoints without hitting performance limitations.

d-Matrix on AI Efficiency and Scale
Sid Sheth, CEO of d-Matrix, introduces the company’s JetStream I/O accelerator, which works alongside its Corsair compute accelerator to scale AI workloads across multiple nodes. A node hosts up to eight Corsair PCIe cards, with JetStream enabling cross-node scaling through a standard Ethernet switch.

DriveNets on Scaling AI Networks
Dudy Cohen, VP of Product Marketing at DriveNets, outlines three essential approaches to AI infrastructure scaling: scale-up, scale-out, and scale-across, demonstrating how modern networks can support up to 576 GPUs in a single cluster. His analysis shows how scale-out networking with fabric-scheduled architectures enables effectively unlimited GPU scalability, while scale-across solutions let organizations manage distributed GPU resources across multiple data centers as one unified system.
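To see why scale-out removes any practical ceiling on GPU count, it helps to run the generic Clos fabric arithmetic. The sketch below assumes a 64-port switch ASIC and models a plain non-blocking leaf/spine topology; it is an illustration only, not a model of DriveNets’ scheduled-fabric implementation:

```c
/* Endpoint capacity of non-blocking leaf/spine fabrics built from
 * fixed-radix switches. Generic Clos arithmetic for illustration;
 * it does not model DriveNets' scheduled-fabric design. */
#include <stdio.h>

int main(void) {
    const long k = 64; /* assumed switch radix, e.g. 64 x 800G ports */

    /* Two tiers: each leaf uses k/2 ports down (to GPUs) and k/2 up
     * (to spines), giving k leaves and k * k/2 attachable endpoints
     * at full bisection bandwidth. */
    long two_tier = k * k / 2;        /* 2,048 GPUs  */

    /* A third tier (fat tree) raises capacity to k^3/4. */
    long three_tier = k * k * k / 4;  /* 65,536 GPUs */

    printf("radix %ld: 2-tier = %ld GPUs, 3-tier = %ld GPUs\n",
           k, two_tier, three_tier);
    return 0;
}
```

Each added tier multiplies the number of attachable endpoints, which is why scale-out growth is bounded by cost and cabling rather than by a hard architectural limit.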

Hedgehog on Open Networking for AI Data Centers
Marc Austin, CEO of Hedgehog, showcases the company’s open-source AI networking software, which delivers better performance on white-box switches than traditional solutions at significantly lower cost. The software automates network operations and supports AI accelerators beyond Nvidia’s, including offerings from AWS and AMD.

Lightmatter on Breaking the AI Interconnect Bottleneck
Steve Klinger, VP of Product at Lightmatter, showcases the company’s Passage photonic technology, which tackles AI computing bottlenecks by increasing chip-to-chip connectivity and bandwidth. The solution enables high-speed data transmission through silicon photonics and integrates with existing data center infrastructure through the company’s L-series offering.

RISC-V for AI
At the AI Infra Summit, John Simpson, Senior Principal Architect at SiFive, showcases RISC-V’s vector-length-agnostic instructions, which let the same software run unchanged across data center and edge devices. He introduces SiFive’s X100 series, featuring scalar co-processing, a hardware-pipelined exponential instruction, and memory system improvements that deliver up to 322x performance gains over previous generations.
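For readers unfamiliar with the vector-length-agnostic (VLA) model, the sketch below shows the idea using the standard RISC-V Vector (RVV 1.0) C intrinsics rather than anything SiFive-specific: the same compiled loop runs correctly whether the hardware’s vector registers are 128 or 512 bits wide, because vsetvl asks the hardware at run time how many elements fit in each pass.

```c
/* Vector-length-agnostic SAXPY using standard RISC-V Vector (RVV 1.0)
 * C intrinsics. Illustrative sketch, not SiFive's own code. */
#include <stddef.h>
#include <riscv_vector.h>

void saxpy_vla(size_t n, float a, const float *x, float *y) {
    while (n > 0) {
        /* Ask the hardware how many 32-bit elements it can process. */
        size_t vl = __riscv_vsetvl_e32m8(n);
        vfloat32m8_t vx = __riscv_vle32_v_f32m8(x, vl);  /* load x  */
        vfloat32m8_t vy = __riscv_vle32_v_f32m8(y, vl);  /* load y  */
        vy = __riscv_vfmacc_vf_f32m8(vy, a, vx, vl);     /* y=a*x+y */
        __riscv_vse32_v_f32m8(y, vy, vl);                /* store y */
        n -= vl; x += vl; y += vl;
    }
}
```

Compiled once with a vector-enabled toolchain (e.g., -march=rv64gcv), the same binary adapts to whatever vector width a given core provides, which is the portability property Simpson highlights.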

Xscape Photonics on Scaling AI with Light
Vivek Raghunathan, Co-Founder and CEO of Xscape Photonics, is developing silicon photonics-based laser solutions to address bandwidth constraints in AI computing systems, where on-package GPU-to-memory bandwidth significantly outpaces package-to-package communication. The company adapts wavelength-division multiplexing to build scalable lasers that generate multiple wavelengths on a silicon chip, with initial products targeting AI fabric and accelerator vendors.
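The leverage of multi-wavelength sources is straightforward multiplication: aggregate escape bandwidth equals wavelengths per fiber times the data rate per wavelength times the number of fibers. The figures below are hypothetical placeholders, not Xscape Photonics product specifications:

```c
/* Why more wavelengths per fiber multiplies package escape bandwidth.
 * All values are hypothetical, not Xscape product specs. */
#include <stdio.h>

int main(void) {
    long wavelengths   = 16;   /* lambdas carried per fiber (assumed)      */
    long gbps_per_lane = 100;  /* data rate per wavelength, Gb/s (assumed) */
    long fibers        = 8;    /* fiber attach points on the package       */

    long aggregate = wavelengths * gbps_per_lane * fibers;
    printf("Escape bandwidth: %ld Gb/s (%.1f Tb/s)\n",
           aggregate, aggregate / 1000.0);  /* 12800 Gb/s = 12.8 Tb/s */
    return 0;
}
```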

UALink - Accelerating AI Interconnect Innovation
Kurtis Bowman, Chairman of the UALink Consortium, outlines how the open-standard interconnect meets AI scale-up requirements, with more than 110 member companies implementing the specification in switches and accelerators. The technology builds on existing Ethernet infrastructure while letting member companies stay focused on their core products, and major tech firms including AWS, Google, Meta, and Intel are actively deploying UALink solutions in their data centers.

Marvell on Custom HBM & SRAM for AI Chips
Mark Kuemerle, VP of Technology and CTO of the ASIC Business Unit at Marvell, presented key memory optimization strategies at the AI Infra Summit, focusing on embedded SRAM IP, custom HBM, and the Striker memory-aggregation device. These innovations improve data center performance by increasing bandwidth and reducing latency across multiple memory types.

Building Blocks for AI Processing
Mark Kuemerle, VP of Technology and CTO of the ASIC Business Unit at Marvell, presented the company’s new die-to-die interface technology at the AI Infra Summit, highlighting its ability to triple bandwidth density while reducing power consumption by 40-70%. The IP block improves interconnect between dies and custom HBM, marking a major step forward in data center AI system development.

NeuReality’s 1.6T AI NIC
Moshe Tanach, Co-Founder and CEO of NeuReality, unveiled the company’s upcoming 1.6 terabit-per-second NIC, which improves GPU efficiency in AI workloads by raising active compute time from 16% to nearly 80%. The new NIC, slated for release in late 2026, features in-network compute capabilities and Ultra Ethernet support, providing an alternative to InfiniBand while maintaining low-latency performance.